134 research outputs found

    A Calibration-and-Error Correction Method for Improved Texel (Fused Ladar/Digital Camera) Images

    The fusion of imaging ladar information and digital imagery results in 2.5-D surfaces covered with texture information. Called texel images, these datasets, when taken from different viewpoints, can be combined to create 3-D images of buildings, vehicles, or other objects. These 3-D images can then be further processed for automatic target recognition or viewed in a 3-D viewer for tactical planning purposes. This paper presents a procedure for calibration, error correction, and fusion of ladar and digital camera information from a single hand-held sensor to create accurate texel images. A brief description of a prototype sensor is given, along with the calibration technique used with the sensor, which is applicable to other imaging ladar/digital image sensor systems. The method combines systematic error correction of the ladar data, correction for lens distortion of the digital camera image, and fusion of the ladar data to the camera data in a single process. The result is a texel image acquired directly from the sensor. Examples of the resulting images, with improvements from the proposed algorithm, are presented.
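    Fusing the two data sources hinges on projecting each 3-D ladar sample into the digital image to pick up its texture. A minimal sketch of that projection step, using a standard pinhole model with hypothetical intrinsics (not the prototype sensor's actual calibration):

    ```python
    import numpy as np

    def project_points(points_3d, K):
        """Project 3-D points (N x 3, camera frame, z > 0) through a pinhole
        camera with intrinsic matrix K, returning N x 2 pixel coordinates."""
        uvw = (K @ points_3d.T).T          # homogeneous image coordinates
        return uvw[:, :2] / uvw[:, 2:3]    # perspective divide

    # Hypothetical intrinsics: 500-pixel focal length, principal point (320, 240).
    K = np.array([[500.0,   0.0, 320.0],
                  [  0.0, 500.0, 240.0],
                  [  0.0,   0.0,   1.0]])
    pts = np.array([[0.0,  0.0,  2.0],    # a point on the optical axis
                    [0.5, -0.25, 2.0]])
    uv = project_points(pts, K)
    ```

    Each projected pixel then supplies the texture value attached to its 3-D point; calibration and distortion correction (discussed below in the listing) determine how accurate this mapping is.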

    Improved registration for 3D image creation using multiple texel images and incorporating low-cost GPS/INS measurements

    The creation of 3D imagery is an important topic in remote sensing. Several methods have been developed to create 3D images from fused ladar and digital images, known as texel images. These methods have the advantage of using both the 3D ladar information and the 2D digital imagery directly, since texel images are fused during data acquisition. A weakness of these methods is that they depend on correlating feature points in the digital images. This can be difficult when image perspectives are significantly different, leading to low correlation values between matching feature points. This paper presents a method to improve the quality of 3D images created using existing approaches that register multiple texel images. The proposed method incorporates relatively low-accuracy measurements of the position and attitude of the texel camera from a low-cost GPS/INS into the registration process. This information can improve the accuracy and robustness of the registered texel images over methods based on point-cloud merging or image registration alone. In addition, the dependence on feature-point correlation is eliminated. Examples illustrate the value of this method for significant differences in image perspective.
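    The coarse GPS/INS pose can seed registration by mapping each sensor-frame point cloud into a common world frame before any fine alignment. A minimal numpy sketch of that initialization, reducing attitude to yaw only and using made-up pose values:

    ```python
    import numpy as np

    def yaw_matrix(theta):
        """Rotation about the vertical axis by theta radians (attitude yaw)."""
        c, s = np.cos(theta), np.sin(theta)
        return np.array([[c, -s, 0.0],
                         [s,  c, 0.0],
                         [0.0, 0.0, 1.0]])

    def coarse_align(points, yaw, position):
        """Map sensor-frame points (N x 3) into the world frame using the
        coarse GPS/INS pose: p_world = R @ p_sensor + t."""
        return points @ yaw_matrix(yaw).T + position

    # Toy cloud, a hypothetical 90-degree yaw, and a 10 m position offset.
    cloud = np.array([[1.0, 0.0, 0.0],
                      [0.0, 2.0, 0.0]])
    aligned = coarse_align(cloud, np.pi / 2, np.array([10.0, 0.0, 0.0]))
    ```

    A full attitude (roll, pitch, yaw) would use three such rotations; the point is that even a low-accuracy pose puts the clouds close enough that fine registration no longer needs feature-point correlation.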

    Multi-rate, real time image compression for images dominated by point sources

    An image compression system recently developed for compression of digital images dominated by point sources is presented. Encoding consists of minimum-mean removal, vector quantization, adaptive threshold truncation, and modified Huffman encoding. Simulations show that the peaks corresponding to point sources can be transmitted losslessly at low signal-to-noise ratios (SNR) and high point-source densities while maintaining a reduced output bit rate. Encoding and decoding hardware has been built and tested that processes 552,960 12-bit pixels per second at compression ratios of 10:1 and 4:1. Simulation results are presented for the 10:1 case only.
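    Two of the pipeline stages can be illustrated in a few lines. This is a sketch of how mean removal and threshold truncation might interact, not the published encoder: the block mean models the background, and only residuals above a threshold (here a fixed value standing in for the adaptive one) survive, so a point-source peak is preserved exactly:

    ```python
    import numpy as np

    def mean_removal_and_truncate(block, threshold):
        """Remove the block mean (background estimate), then zero residuals
        whose magnitude falls below the threshold, keeping point-source
        peaks intact.  VQ and Huffman stages of the pipeline are omitted."""
        mean = block.mean()
        residual = block - mean
        kept = np.where(np.abs(residual) >= threshold, residual, 0.0)
        return mean, kept

    # One bright point source riding on a noisy ~10-count background.
    block = np.array([10.0, 11.0, 9.0, 10.0, 250.0, 10.0, 9.0, 11.0])
    mean, kept = mean_removal_and_truncate(block, threshold=50.0)
    ```

    Only the mean, the surviving residuals, and their positions need to be coded, which is where the entropy coding stage recovers the bit-rate reduction.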

    Calibration Method for Texel Images Created from Fused Lidar and Digital Camera Images

    The fusion of imaging lidar information and digital imagery results in 2.5-dimensional surfaces covered with texture information, called texel images. These data sets, when taken from different viewpoints, can be combined to create three-dimensional (3-D) images of buildings, vehicles, or other objects. This paper presents a procedure for calibration, error correction, and fusing of flash lidar and digital camera information from a single sensor configuration to create accurate texel images. A brief description of a prototype sensor is given, along with a calibration technique used with the sensor, which is applicable to other flash lidar/digital image sensor systems. The method combines systematic error correction of the flash lidar data, correction for lens distortion of the digital camera and flash lidar images, and fusion of the lidar to the camera data in a single process. The result is a texel image acquired directly from the sensor. Examples of the resulting images, with improvements from the proposed algorithm, are presented. Results with the prototype sensor show a very good match between 3-D points and the digital image (< 2.8 image pixels), with a 3-D object measurement error of < 0.5%, compared to a noncalibrated error of ∼3%.
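    Lens-distortion correction of the camera and flash lidar images is one of the combined steps. A sketch of a standard radial (Brown-Conrady) model with hypothetical coefficients, inverted by fixed-point iteration; the paper's actual distortion model and parameters may differ:

    ```python
    import numpy as np

    def apply_radial_distortion(xy, k1, k2):
        """Apply a Brown-Conrady radial model to normalized image
        coordinates (N x 2): x_d = x * (1 + k1*r^2 + k2*r^4)."""
        r2 = np.sum(xy**2, axis=1, keepdims=True)
        return xy * (1.0 + k1 * r2 + k2 * r2**2)

    def undistort(xy_distorted, k1, k2, iterations=20):
        """Invert the model by fixed-point iteration: divide the distorted
        point by the distortion factor evaluated at the current estimate."""
        xy = xy_distorted
        for _ in range(iterations):
            r2 = np.sum(xy**2, axis=1, keepdims=True)
            xy = xy_distorted / (1.0 + k1 * r2 + k2 * r2**2)
        return xy

    pts = np.array([[0.3, -0.2], [0.1, 0.4]])
    k1, k2 = -0.25, 0.05                  # hypothetical coefficients
    distorted = apply_radial_distortion(pts, k1, k2)
    recovered = undistort(distorted, k1, k2)
    ```

    For mild distortion the iteration converges quickly; correcting both the camera and lidar images this way is what lets the 3-D points land within a few pixels of their true image locations.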

    Rate-distortion adaptive vector quantization for wavelet image coding

    We propose a wavelet image coding scheme using rate-distortion adaptive tree-structured residual vector quantization. Wavelet transform coefficient coding is based on the pyramid hierarchy (zero-tree), but rather than determining the zero-tree relation from the coarsest subband to the finest by hard thresholding, the prediction in our scheme is achieved by rate-distortion optimization with adaptive vector quantization on the wavelet coefficients from the finest subband to the coarsest. The proposed method involves only integer operations and can be implemented with very low computational complexity. Preliminary experiments show encouraging results: a PSNR of 30.93 dB at 0.174 bpp on the 512×512 test image LENA.
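    The rate-distortion optimization can be illustrated with the classic Lagrangian selection rule: among candidate quantizers, pick the one minimizing J = D + λR. A toy sketch with hypothetical codebooks and squared-error distortion (the paper's actual codebooks and rate model are more elaborate):

    ```python
    import numpy as np

    def best_codebook(vector, codebooks, lam):
        """Over several codebook sizes, pick the (codebook, codeword) pair
        minimizing the Lagrangian cost J = D + lam * R, where D is squared
        error and R = log2(codebook size) bits per vector."""
        best = None
        for cb in codebooks:
            rate = np.log2(len(cb))
            d = np.sum((cb - vector) ** 2, axis=1)
            i = int(np.argmin(d))
            cost = d[i] + lam * rate
            if best is None or cost < best[0]:
                best = (cost, cb, i)
        return best

    small = np.array([[0.0, 0.0], [1.0, 1.0]])        # 1-bit codebook
    large = np.array([[0.0, 0.0], [0.5, 0.5],
                      [1.0, 1.0], [1.5, 1.5]])        # 2-bit codebook
    cost, cb, i = best_codebook(np.array([0.6, 0.4]), [small, large], lam=0.1)
    ```

    At a small λ the larger codebook wins because its distortion saving outweighs the extra bit; raising λ tilts the choice back toward the cheaper codebook, which is exactly the rate-distortion trade-off the scheme exploits.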

    Classification using set-valued Kalman filtering and Levi's decision theory

    We consider the problem of using Levi's expected epistemic decision theory for classification when the hypotheses are of different informational values, conditioned on convex sets obtained from a set-valued Kalman filter. The background of epistemic utility decision theory with convex probabilities is outlined, and a brief introduction to set-valued estimation is given. The decision theory is applied to a classifier in a multiple-target tracking scenario. A new probability density, appropriate for classification using the ratio of intensities, is introduced.

    Rate-Distortion Optimized Vector SPIHT for Wavelet Image Coding

    In this paper, a novel image coding scheme using rate-distortion optimized vector quantization of wavelet coefficients is presented. A vector set partitioning algorithm is used to locate significant wavelet vectors, which are classified into a number of classes based on their energies, thus reducing the complexity of the vector quantization. The set partitioning bits are reused to indicate the vector classification indices, saving the bits for coding of the classification overhead. A set of codebooks with different sizes is designed for each class of vectors, and a Lagrangian optimization algorithm is employed to select an optimal codebook for each vector. The proposed coding scheme trades off the number of bits used to code each vector against the corresponding distortion. Experimental results show that our proposed method outperforms other zerotree-structured embedded wavelet coding schemes such as SPIHT and SFQ, and is competitive with JPEG2000.

    Range Resolution Improvement of Eyesafe Ladar Testbed (ELT) Measurements Using Sparse Signal Deconvolution

    The Eyesafe Ladar Test-bed (ELT) is an experimental ladar system with the capability of digitizing return laser pulse waveforms at 2 GHz. These waveforms can then be exploited off-line in the laboratory to develop signal processing techniques for noise reduction, range resolution improvement, and range discrimination between two surfaces of similar range interrogated by a single laser pulse. This paper presents the results of experiments with new deconvolution algorithms aimed at improving the range discrimination of the ladar system. The sparsity of ladar returns is exploited to solve the deconvolution problem in two steps. The first step is to estimate a point-target response using a database of measured calibration data. This basic target response is used to construct a dictionary of target responses with different delays/ranges. Using this dictionary, ladar returns from a wide variety of surface configurations can be synthesized by taking linear combinations. A sparse linear combination matches the physical reality that ladar returns consist of the overlapping of only a few pulses. The dictionary construction is a pre-processing step that is performed only once. The deconvolution step is performed by minimizing the error between the measured ladar return and the dictionary model while constraining the coefficient vector to be sparse. Other constraints, such as non-negativity of the coefficients, are also applied. The results of the proposed technique are presented in the paper and are shown to compare favorably with previously investigated deconvolution techniques.
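    The deconvolution step described above can be sketched with a shifted-pulse dictionary and non-negative least squares, which in a noiseless, well-conditioned toy case already yields a sparse coefficient vector; the paper's method adds an explicit sparsity constraint on top. Pulse shape, sizes, and delays below are hypothetical, not ELT data:

    ```python
    import numpy as np
    from scipy.optimize import nnls

    # Hypothetical point-target response (a short pulse) and a dictionary
    # whose columns are copies of it at every candidate delay.
    pulse = np.array([0.2, 1.0, 0.4])
    n = 12
    D = np.zeros((n, n - len(pulse) + 1))
    for j in range(D.shape[1]):
        D[j:j + len(pulse), j] = pulse

    # Synthesize a return from two overlapping surfaces at delays 2 and 4.
    truth = np.zeros(D.shape[1])
    truth[2], truth[4] = 1.0, 0.6
    y = D @ truth

    # Non-negative least squares: min ||D c - y|| subject to c >= 0.
    coeffs, resid_norm = nnls(D, y)
    ```

    The nonzero entries of `coeffs` give the recovered delays (ranges) and amplitudes of the two surfaces, even though their pulses overlap in the measured waveform.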

    Automatic Merging of Lidar Point-Clouds Using Data from Low-Cost GPS/IMU Systems

    Stationary lidar (Light Detection and Ranging) systems are often used to collect 3-D data (point clouds) that can be used for terrain modelling. The lidar gathers scans which are then merged together to map a terrain. Typically this is done using a variant of the well-known Iterated Closest Point (ICP) algorithm when the position and pose of the lidar scanner are not accurately known. One difficulty with ICP algorithms is that they can give poor results when points that are not common to both scans (outliers) are matched together. With the advent of MEMS (microelectromechanical systems)-based GPS/IMU systems, it is possible to gather coarse position and pose information at low cost. This information is not accurate enough to merge point clouds directly, but it can be used to assist the ICP algorithm during the merging process. This paper presents a method called Sphere Outlier Removal (SOR), which accurately identifies outliers and inliers, a necessary prerequisite to using the ICP algorithm. SOR incorporates the information from a low-cost GPS/IMU to perform this identification. Examples are presented which illustrate the improvement in the accuracy of merged point clouds when the SOR algorithm is used.
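    The core idea, using the coarse GPS/IMU pose to decide which points of one scan have counterparts in the other, can be sketched as follows. This is a simplified illustration, not the published SOR algorithm; the pose, clouds, and rejection radius are made up:

    ```python
    import numpy as np

    def flag_outliers(cloud_a, cloud_b, R, t, radius):
        """Transform cloud_b into cloud_a's frame with the coarse GPS/IMU
        pose (R, t), then flag as outliers the points of cloud_b with no
        cloud_a neighbour within `radius` (a radius sized to cover the
        pose uncertainty).  Returns a boolean mask, True = outlier."""
        b_in_a = cloud_b @ R.T + t
        d = np.linalg.norm(b_in_a[:, None, :] - cloud_a[None, :, :], axis=2)
        return d.min(axis=1) > radius

    cloud_a = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [2.0, 0.0, 0.0]])
    # cloud_b seen from a sensor displaced by (5, 0, 0) with identity
    # attitude, plus one point (the last) with no counterpart in cloud_a.
    cloud_b = np.array([[5.0, 0.0, 0.0], [6.0, 0.0, 0.0], [9.0, 9.0, 0.0]])
    outliers = flag_outliers(cloud_a, cloud_b, np.eye(3),
                             np.array([-5.0, 0.0, 0.0]), radius=0.5)
    ```

    With the outliers masked out, ICP matches only points the two scans actually share, which is what restores its accuracy. (The brute-force pairwise distance here is O(N·M); a k-d tree would replace it for real scans.)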

    A fast full-search adaptive vector quantizer for video coding

    This paper presents a novel VQ structure which provides very good encoding quality for video sequences and exploits the computational savings gained from a fast-search algorithm. It uses an adaptive-search, variable-length encoding method which allows very fast matching over a wide range of transmission rates. Both the encoding quality and the computational benefits of the fast-search algorithm are presented. Simulations show that full-search tree residual VQ (FTRVQ) can provide up to 3 dB improvement over a similar RVQ encoder on video sequences.
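    The residual-VQ structure underlying FTRVQ can be sketched as successive quantization of residuals, each stage with its own codebook; the codebooks below are hypothetical, and the tree and fast-search machinery of the paper are omitted:

    ```python
    import numpy as np

    def rvq_encode(vector, stage_codebooks):
        """Multi-stage residual VQ: at each stage, pick the codeword nearest
        the current residual, then pass on what remains to the next stage.
        Returns the per-stage indices and the final residual."""
        indices, residual = [], vector.astype(float)
        for cb in stage_codebooks:
            d = np.sum((cb - residual) ** 2, axis=1)
            i = int(np.argmin(d))
            indices.append(i)
            residual = residual - cb[i]
        return indices, residual

    stage1 = np.array([[0.0, 0.0], [4.0, 4.0]])            # coarse stage
    stage2 = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])  # refinement
    indices, residual = rvq_encode(np.array([5.1, 4.0]), [stage1, stage2])
    ```

    Each stage transmits only a small index, so the total rate is the sum of small per-stage rates while the reconstruction (the sum of the chosen codewords) improves stage by stage.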